https://w.atwiki.jp/eathena/pages/20.html
// Athena Character configuration file.

// Note: "Comments" are all text on the right side of a double slash "//"
// Whatever text is commented will not be parsed by the servers, and serves
// only as information/reference.
// (Translator's note: prefix a line with two slashes "//" to comment it out;
// the servers will not read it, so it can be used for annotations.)

// Server Communication username and password.
userid: s1
passwd: p1

// Server name, use alternative characters such as ASCII 160 for spaces.
// NOTE: Do not use spaces in the name, or guild emblems won't work client-side!
server_name: KlonosServer

// Wisp name for server: used to send wisps from the server to players (between 4 and 23 characters)
wisp_server_name: Server

// Login Server IP
// The character server connects to the login server using this IP address.
// NOTE: This is useful when you are running behind a firewall or are on
// a machine with multiple interfaces.
//login_ip: 127.0.0.1

// The character server listens on the interface with this IP address.
// NOTE: This allows you to run multiple servers on multiple interfaces
// while using the same ports for each server.
//bind_ip: 127.0.0.1

// Login Server Port
login_port: 6900

// Character Server IP
// The IP address which clients will use to connect.
// Set this to your server's public IP address.
char_ip: klonos.ddo.jp

// Character Server Port
char_port: 6121

// Time-stamp format which will be printed before all messages.
// Can at most be 20 characters long.
// Common formats:
// %I:%M:%S %p (hour:minute:second, 12-hour AM/PM format)
// %H:%M:%S (hour:minute:second, 24-hour format)
// %d/%b/%Y (day/Month/year)
// For full format information, consult the strftime() manual.
// (Translator's note: the format below prints day/month hour:minute.)
//timestamp_format: [%d/%b %H:%M]

// If redirected output contains escape sequences (color codes)
stdout_with_ansisequence: no

// Makes server output more silent by omitting certain types of messages:
// 1: Hide Information messages
// 2: Hide Status messages
// 4: Hide Notice messages
// 8: Hide Warning messages
// 16: Hide Error and SQL Error messages
// 32: Hide Debug messages
// Example: "console_silent: 7" hides Information, Status and Notice messages (1+2+4)
console_silent: 0

// Console Commands
// Allow console commands to be used: on/off
// This prevents usage of log.file
console: off

// Option to force a player to register an e-mail.
// If a player has the default e-mail and you activate this option, the player can
// only connect to the game (arrive on a map) as follows:
// - Create at least 1 character
// - Select 1 character
// - Select DEL to enter his/her e-mail. (If OK is chosen, the client tells the player "invalid e-mail".)
// - If the e-mail is correct, the player enters the game (the e-mail is saved definitively).
// - If the e-mail is incorrect, he/she gets "incorrect e-mail" and must select DEL again.
// - After entering the game (when the player arrives on a map), the DEL and SEL/OK
//   buttons work normally for all subsequent connections.
// In summary: if a player gets "incorrect/invalid e-mail" when clicking the OK button,
// the player must click the DEL button and register his/her NEW e-mail to enter the game.
// So the default is 0, because administrators must explain this to their players before
// activating this option.
email_creation: 0

// Whether to put the character server into maintenance mode.
// Setting this to 1 enters maintenance mode, and nobody except GMs can connect.
char_maintenance: 0

// Enable or disable creation of new characters.
// Now it is actually supported [Kevin]
char_new: 1

// Display (New) in the server list.
char_new_display: 0

// Maximum users able to connect to the server. Set to 0 for unlimited.
max_connect_user: 0

// When set to yes, the char server will refuse connections from players already online.
// When a login attempt is rejected, the account in question will be booted from all the
// connected map servers.
// Note that this only works within the char-server and its connected map-servers;
// the char-server cannot know if the same account is logged on in other char-servers.
// It's safe to turn off if the char-server only has a single map-server connected to it.
// online_check: yes

// Minimum GM level that is allowed to bypass the server limit of users.
gm_allow_level: 99

// How often should the server save all files? (In seconds)
// Note: Applies to all data files on TXT servers.
// On SQL servers, it applies to guilds (the character save interval is defined in the map config).
autosave_time: 60

// Display information on the console whenever characters/guilds/parties/pets are loaded/saved?
save_log: yes

// Character server flatfile database
char_txt: save/athena.txt

// Friends list flatfile database
friends_txt: save/friends.txt

// Start point: map name followed by coordinates (x,y)
// (Translator's note: the .gat extension appears to be unnecessary.)
start_point: new_1-1,53,111

// Starting weapon for new characters
start_weapon: 1201

// Starting armor for new characters
start_armor: 2301

// Starting zeny for new characters
start_zeny: 0

// Size of the fame-lists
fame_list_alchemist: 10
fame_list_blacksmith: 10
fame_list_taekwon: 10

// Guild earned exp modifier.
// Adjusts taxed exp before adding it to the guild's exp. For example, if set
// to 200, the guild receives double the player's taxed exp.
guild_exp_rate: 100

// Name used for unknown characters
unknown_char_name: Unknown

// To log the character server?
log_char: 1

// Log Filename
char_log_filename: log/char.log

// Allow or not identical names for characters but with a different case (upper/lower).
// Example: Test-test-TEST-TesT; value 0: not allowed (default), 1: allowed
name_ignoring_case: no

// Manage the possible letters/symbols in the name of a character. Control characters (0x00-0x1f) are never accepted. Possible values are:
// NOTE: Applies to character, party and guild names.
// 0: no restriction (default)
// 1: only letters/symbols in the char_name_letters option.
// 2: letters/symbols in the char_name_letters option are forbidden. All others are possible.
char_name_option: 0

// Set the letters/symbols that you want to be used with the char_name_option option.
// Note: Don't add spaces unless you mean to add space to the list.
char_name_letters: abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890

// Character rename option. When set to yes, the server will send an extended
// char-info packet, informing whether the character can be renamed or not.
// NOTE: This functionality is not implemented.
// NOTE: This option is for compatibility with kRO sakray 2006-10-23 and newer.
// !Do not use it for any other type of client since it will crash them!
char_rename: yes

// How many characters are allowed per account? (0 = disabled) [SQL Only!]
chars_per_account: 0

// Restrict character deletion by BaseLevel
// 0: no restriction (players can delete characters of any level)
// -X: you can't delete chars with BaseLevel <= X
// Y: you can't delete chars with BaseLevel >= Y
// e.g. char_del_level: 80 (players can't delete characters with 80+ BaseLevel)
char_del_level: 0

// What folder the DB files are in (item_db.txt, etc.)
db_path: db

// NOTE: The following online listing options are only for TXT servers.

// Filename of the file which receives the online players list in text
online_txt_filename: online.txt

// Filename of the file which receives the online players list, in an HTML version
online_html_filename: online.html

// Choose how to display online players.
// (The sorting operation with a lot of online players can take time on a slow computer.)
// 0: no sorting (default)
// 1: by alphabetical order of their name
// 2: by number of their zenys
// 3: by their base level
// 4: by their job (and job level inside the same job)
// 5: by alphabetical order of their actual map location
online_sorting_option: 0

// Choose which columns you want to display in the online files. Add these values together.
// (If the value is 0, no file is written.)
// 1: name (just the name, no function like "GM")
// 2: job
// 4: levels
// 8: map name
// 16: map name and coordinates
// 32: zenys
// 64: name (with "GM" if the player is a GM)
// default value: 1 (only name)
online_display_option: 1

// Minimum GM level to display "GM" when we want to display it (default: 1)
online_gm_display_min_level: 20

// Refresh time (in sec) of the html file in the explorer (default: 20)
online_refresh_html: 20

import: conf/import/char_conf.txt
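Options such as console_silent and online_display_option are additive bit flags: each message type or column corresponds to a power of two, and the configured value is their sum. A minimal Python sketch of how such a value decomposes (the flag names and values are taken from the console_silent comments above; the function itself is my own illustration, not server code):

```python
# Bit flags as documented for console_silent in the config above.
CONSOLE_FLAGS = {
    1: "Information",
    2: "Status",
    4: "Notice",
    8: "Warning",
    16: "Error/SQL Error",
    32: "Debug",
}

def hidden_message_types(console_silent):
    """Return the message types hidden by a given console_silent value."""
    return [name for bit, name in CONSOLE_FLAGS.items() if console_silent & bit]

# "console_silent: 7" hides Information, Status and Notice messages (1+2+4).
print(hidden_message_types(7))  # ['Information', 'Status', 'Notice']
```

The same decomposition applies to online_display_option, just with the column flags listed above.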
https://w.atwiki.jp/wiki5_hks/pages/39.html
Extracting information from OUTPUT files

Run it on the Web: article / article (addendum). Run location: http://homepage1.nifty.com/hkasai/NMonal/NMonal.html

Extracting information from multiple OUTPUT files

Extract the necessary information from every OUTPUT file in a given folder. The folder may also contain files other than OUTPUT files; only the OUTPUT files are recognized, and the processing below runs automatically.

Usage:
1. Save the two programs below (ReadNMoutMulti.pl and NMoutAnal.pl) in one folder.
2. Install Perl beforehand. (See the "Perl" page.)
3. Run as follows. The results are written to a CSV file. "Summary.csv" is an arbitrary name chosen by the user; any name will do.

### To process every OUTPUT file in the current folder ###
C:\nmv\run> perl ReadNMoutMulti.pl . > Summary.csv
### To process the files in the OUT folder under the current folder ###
C:\nmv\run> perl ReadNMoutMulti.pl out > Summary.csv

Note: Error detection for the COV step is not supported. == Now supported.
No guarantee is given for the results.

ReadNMoutMulti.pl

use strict;

require 'NMoutAnal.pl';

my $ThisScriptName = "ReadNMoutMulti.pl";
if (@ARGV != 1) {
    print "Usage: perl $ThisScriptName FOLDER_name\n\n";
    die;
}
my $dirname = shift @ARGV;

my @result_all;
opendir(DIR, $dirname) or die "$dirname: $!";
while (defined(my $fname = readdir(DIR))) {
    next if ($fname =~ /pl$/);
    next unless (-f "$dirname\\$fname");
    open(FILE, "$dirname\\$fname") or die "$dirname\\$fname: $!";
    my $isOutputFile = 0;
    while (defined(my $line = <FILE>)) {
        if ($line =~ /DEVELOPED AND PROGRAMMED BY STUART BEAL AND LEWIS SHEINER/) {
            $isOutputFile = 1;
        }
    }
    close(FILE);
    if ($isOutputFile) {
        my %result1 = NMoutAnal("$dirname\\$fname");
        $result1{'FNAME'} = $fname;
        push(@result_all, \%result1);
    }
}
closedir(DIR);

my $max_ntheta = 0;
my $max_nomega = 0;
my $max_nsigma = 0;
foreach my $res1 (@result_all) {
    if ($res1->{'NTHETA'} > $max_ntheta) { $max_ntheta = ${$res1}{'NTHETA'}; }
    if ($res1->{'NOMEGA'} > $max_nomega) { $max_nomega = ${$res1}{'NOMEGA'}; }
    if ($res1->{'NSIGMA'} > $max_nsigma) { $max_nsigma = ${$res1}{'NSIGMA'}; }
}

print "Output,EST Status,COV Status,NIndiv,Nobs,OBJ";
for (1..$max_ntheta) { print ",TH$_"; }
for (1..$max_nomega) { print ",OM$_"; }
for (1..$max_nsigma) { print ",SG$_"; }
for (1..$max_ntheta) { print ",SETH$_"; }
for (1..$max_nomega) { print ",SEOM$_"; }
for (1..$max_nsigma) { print ",SESG$_"; }
print "\n";

my @values;
my $COV_status;
foreach my $res1 (@result_all) {
    $COV_status = "Success";
    if ($res1->{'COV_STATUS'} == 0) {
        $COV_status = "Failure";
    } elsif ($res1->{'COV_STATUS'} == -1) {
        $COV_status = "Not Implemented";
    }
    print "$res1->{'FNAME'}";
    print ",$res1->{'STATUS'}";
    print ",$COV_status";    # COV status
    print ",$res1->{'NINDIV'}";
    print ",$res1->{'NOBS'}";
    print ",$res1->{'OBJ'}";
    @values = ConvEStrToNum($res1->{'THETA'});
    print ",", join(",", @values);
    if ($res1->{'NTHETA'} < $max_ntheta) {
        foreach my $i (($res1->{'NTHETA'} + 1)..$max_ntheta) { print ",."; }
    }
    @values = ConvEStrToNum($res1->{'OMEGA'});
    print ",", join(",", @values);
    if ($res1->{'NOMEGA'} < $max_nomega) {
        foreach my $i (($res1->{'NOMEGA'} + 1)..$max_nomega) { print ",."; }
    }
    @values = ConvEStrToNum($res1->{'SIGMA'});
    print ",", join(",", @values);
    if ($res1->{'NSIGMA'} < $max_nsigma) {
        foreach my $i (($res1->{'NSIGMA'} + 1)..$max_nsigma) { print ",."; }
    }
    @values = ConvEStrToNum($res1->{'SE_THETA'});
    print ",", join(",", @values);
    if ($res1->{'NTHETA'} < $max_ntheta) {
        foreach my $i (($res1->{'NTHETA'} + 1)..$max_ntheta) { print ",."; }
    }
    @values = ConvEStrToNum($res1->{'SE_OMEGA'});
    print ",", join(",", @values);
    if ($res1->{'NOMEGA'} < $max_nomega) {
        foreach my $i (($res1->{'NOMEGA'} + 1)..$max_nomega) { print ",."; }
    }
    @values = ConvEStrToNum($res1->{'SE_SIGMA'});
    print ",", join(",", @values);
    if ($res1->{'NSIGMA'} < $max_nsigma) {
        foreach my $i (($res1->{'NSIGMA'} + 1)..$max_nsigma) { print ",."; }
    }
    print "\n";
}

NMoutAnal.pl

use strict;

sub NMoutAnal {
    if (@_ != 1) {
        print "Irregular arguments.\n\n";
        die;
    }
    my $output_file = shift @_;
    if (not -e $output_file) {
        print "File not found: $output_file\n\n";
        die;
    }
    open(OUTPUT, $output_file) or die "$!";
    my @lines = <OUTPUT>;
    my $nlines = @lines;
    close(OUTPUT);

    my %result;
    my $problem;
    my $tmp;
    my $nobs;
    my $nindiv;
    my $del;
    my $status;
    my $obj;
    my @theta;
    my $ntheta;
    my $str;
    my @omega;
    my $nomega;
    my @sigma;
    my $nsigma;
    my @se_theta;
    my @se_omega;
    my @se_sigma;

    foreach my $i (0..($nlines - 1)) {
        if ($lines[$i] =~ /PROBLEM NO\./) {
            $i++;
            chomp($lines[$i]);
            $problem = $lines[$i];
            $problem =~ s/^ +//;
            $problem =~ s/ +$//;
            $result{'PROBLEM'} = $problem;
        } elsif ($lines[$i] =~ /OBS RECS/) {
            chomp($lines[$i]);
            ($tmp, $nobs) = split(/: +/, $lines[$i]);
            $result{'NOBS'} = $nobs;
        } elsif ($lines[$i] =~ /OF INDIVIDUALS/) {
            chomp($lines[$i]);
            ($tmp, $nindiv) = split(/: +/, $lines[$i]);
            $result{'NINDIV'} = $nindiv;
        } elsif ($lines[$i] =~ /MINIMIZATION/) {
            chomp($lines[$i]);
            $del = 0;
            if (substr($lines[$i], 0, 1) eq '0') { $del = 1; }
            $status = substr($lines[$i], $del);
            $result{'STATUS'} = $status;
        } elsif ($lines[$i] =~ /MINIMUM VALUE OF OBJ/) {
            do { $i++; } until ($lines[$i] =~ /[\-\d\.]+/);
            $obj = $&;
            $result{'OBJ'} = $obj;
        } elsif ($lines[$i] =~ /FINAL PARAMETER ESTIMATE/) {
            do { $i++; } until ($lines[$i] =~ /TH 1/);
            $i += 2;
            chomp($lines[$i]);
            @theta = split(' ', $lines[$i]);
            $ntheta = @theta;
            $result{'NTHETA'} = $ntheta;
            $result{'THETA'} = "@theta";
            if ($lines[$i + 4] =~ /OMEGA/) {
                do { $i++; } until ($lines[$i] =~ /^ ?ETA1/);
                $i++;
                chomp($lines[$i]);
                $del = 0;
                if (substr($lines[$i], 0, 1) eq '+') { $del = 1; }
                $str = substr($lines[$i], $del);
                @omega = split(' ', $str);
                $i += 2;
                while ($lines[$i] =~ /^ ?ETA/) {
                    $i++;
                    chomp($lines[$i]);
                    $del = 0;
                    if (substr($lines[$i], 0, 1) eq '+') { $del = 1; }
                    $str = substr($lines[$i], $del);
                    @omega = (@omega, split(' ', $str));
                    $i += 2;
                };
                $nomega = @omega;
                $result{'NOMEGA'} = $nomega;
                $result{'OMEGA'} = "@omega";
            }
            if ($lines[$i + 2] =~ /SIGMA/) {
                do { $i++; } until ($lines[$i] =~ /^ ?EPS1/);
                $i++;
                chomp($lines[$i]);
                $del = 0;
                if (substr($lines[$i], 0, 1) eq '+') { $del = 1; }
                $str = substr($lines[$i], $del);
                @sigma = split(' ', $str);
                $i += 2;
                while ($lines[$i] =~ /^ ?EPS/) {
                    $i++;
                    chomp($lines[$i]);
                    $del = 0;
                    if (substr($lines[$i], 0, 1) eq '+') { $del = 1; }
                    $str = substr($lines[$i], $del);
                    @sigma = (@sigma, split(' ', $str));
                    $i += 2;
                };
                $nsigma = @sigma;
                $result{'NSIGMA'} = $nsigma;
                $result{'SIGMA'} = "@sigma";
            }
        } elsif ($lines[$i] =~ /STANDARD ERROR/) {
            do { $i++; } until ($lines[$i] =~ /TH 1/);
            $i += 2;
            chomp($lines[$i]);
            @se_theta = split(' ', $lines[$i]);
            $result{'SE_THETA'} = "@se_theta";
            if ($lines[$i + 4] =~ /OMEGA/) {
                do { $i++; } until ($lines[$i] =~ /^ ?ETA1/);
                $i++;
                chomp($lines[$i]);
                $del = 0;
                if (substr($lines[$i], 0, 1) eq '+') { $del = 1; }
                $str = substr($lines[$i], $del);
                @se_omega = split(' ', $str);
                $i += 2;
                while ($lines[$i] =~ /^ ?ETA/) {
                    $i++;
                    chomp($lines[$i]);
                    $del = 0;
                    if (substr($lines[$i], 0, 1) eq '+') { $del = 1; }
                    $str = substr($lines[$i], $del);
                    @se_omega = (@se_omega, split(' ', $str));
                    $i += 2;
                };
                $result{'SE_OMEGA'} = "@se_omega";
            }
            if ($lines[$i + 2] =~ /SIGMA/) {
                do { $i++; } until ($lines[$i] =~ /^ ?EPS1/);
                $i++;
                chomp($lines[$i]);
                $del = 0;
                if (substr($lines[$i], 0, 1) eq '+') { $del = 1; }
                $str = substr($lines[$i], $del);
                @se_sigma = split(' ', $str);
                $i += 2;
                while ($lines[$i] =~ /^ ?EPS/) {
                    $i++;
                    chomp($lines[$i]);
                    $del = 0;
                    if (substr($lines[$i], 0, 1) eq '+') { $del = 1; }
                    $str = substr($lines[$i], $del);
                    @se_sigma = (@se_sigma, split(' ', $str));
                    $i += 2;
                };
                $result{'SE_SIGMA'} = "@se_sigma";
            }
        }
    }

    # Detect COV step errors
    my $COV_status = -1;    # NOT IMPLEMENTED
    foreach my $line (@lines) {
        if ($line =~ /COVARIANCE STEP OMITTED:.*$/) {
            my ($tmp, $COV_exec) = split(/:/, $&);
            if ($COV_exec =~ /NO/) {
                $COV_status = 1;    # IMPLEMENTED
            }
        }
    }
    if ($COV_status == 1) {    # the COV step was executed
        if (!($se_theta[0] > 0)) { $COV_status = 0; }
    }
    $result{'COV_STATUS'} = $COV_status;

    return %result;
}

sub ConvEStrToNum {
    my $str = shift @_;
    my @value = split(/ /, $str);
    foreach (@value) {
        $_ += 0;
    }
    return (@value);
}

1;
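The helper ConvEStrToNum above simply coerces NONMEM's formatted estimate strings (e.g. 1.23E+00) to plain numbers, and the CSV writer pads short runs with "." so every row has the same number of columns as the widest run. The same two ideas in Python (function names are my own, not part of the scripts above):

```python
def conv_estr_to_num(s):
    """Convert a whitespace-separated string of NONMEM estimates to floats."""
    return [float(tok) for tok in s.split()]

def pad_to_width(values, width, filler="."):
    """Pad a row of estimates with '.' so every CSV row has `width` columns."""
    return values + [filler] * (width - len(values))

row = conv_estr_to_num("1.23E+00 4.56E-01")
print(pad_to_width(row, 4))  # two estimates padded out to four columns
```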
https://w.atwiki.jp/kurosuke_se_zi/pages/22.html
Logging in Tomcat

Important note: By default, only java.util.logging is used by Tomcat. Tomcat's internal logging is hardcoded, but it can be switched to another logging framework by replacing the logging module. The replacement components can be built or downloaded, embedded, and then used; see the extras components documentation, which describes embedding alternative logging modules in detail.

Introduction

Tomcat 6.0 uses Commons Logging in its internal code, so developers can also use the logging module they need, for example log4j. Commons Logging gives Tomcat hierarchical logging and the ability to configure log levels, without having to write code that depends on a specific logging implementation.

A major difference between Tomcat 6.0 and 5.0 is that the Logger element can no longer be used inside the Context element. Instead, Tomcat's default logging configuration is now java.util.logging. If a developer wants to see the logs Tomcat emits internally, configure java.util.logging or Log4j as explained below.

log4j

Tomcat 6.0 writes the logs related to runtime exceptions/stack traces thrown by applications to localhost_log. These kinds of exceptions are usually caused by uncaught exceptions, but they are valuable to developers; you can find them in the stdout log. If you need to set up detailed logging from Tomcat's own code, you can use a simple log4j configuration. Note that depending on the log level you choose, this output can be very verbose. Also note that a log4j configuration will not capture the stack-trace type output: those stack traces go to stdout, as discussed above.

Follow the steps below to send Tomcat's internal logs to a file named tomcat.log.

1. Save a file named log4j.properties with the following contents in $CATALINA_HOME/lib:

log4j.rootLogger=debug, R
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=${catalina.home}/logs/tomcat.log
log4j.appender.R.MaxFileSize=10MB
log4j.appender.R.MaxBackupIndex=10
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%p %t %c - %m%n
log4j.logger.org.apache.catalina=DEBUG, R

2. Download Log4J (v1.2 or later) and place it in $CATALINA_HOME/lib.
3. Build or download the additional logging package (see the extras components documentation for details).
4. Replace $CATALINA_HOME/bin/tomcat-juli.jar with the tomcat-juli.jar prepared in step 3.
5. Place the tomcat-juli-adapters.jar prepared in step 3 in $CATALINA_HOME/lib.
6. Start Tomcat.

This configuration rotates tomcat.log once it reaches 10MB and keeps 10 backups. The log level is DEBUG, the most detailed output Tomcat produces.

You can also configure logging more selectively, per package. Tomcat 6 defines loggers by Engine and Host name. For example, for the default Catalina localhost log, add the following to log4j.properties. Note that XML-based log4j configuration has problems converting names that contain square brackets []; until a future version of log4j resolves this, the properties-file format is recommended.

log4j.logger.org.apache.catalina.core.ContainerBase.[Catalina].[localhost]=DEBUG, R
log4j.logger.org.apache.catalina.core=DEBUG, R
log4j.logger.org.apache.catalina.session=DEBUG, R

Be warned: running Tomcat at the DEBUG level produces megabytes of logs and slows Tomcat's startup. Use it only when needed.

Each web application should have its own log4j configuration, and that configuration is effective only for that application. Set up a log4j.properties as described here, place it in the application's WEB-INF/classes, and put the log4j jar in WEB-INF/lib; the specified packages will then be logged at the chosen levels.

Only the basic log4j configuration is described here; consult the log4j documentation for the detailed options. This page is only meant to get you started.

java.util.logging

The default implementation of java.util.logging provided in the JDK is too limited to be useful. A limitation of JDK Logging appears to be the inability to have per-web application logging, as the configuration is per-VM.
As a result, Tomcat will, in the default configuration, replace the default LogManager implementation with a container-friendly implementation called JULI, which addresses these shortcomings. It supports the same configuration mechanisms as the standard JDK java.util.logging, using either a programmatic approach or properties files. The main difference is that per-classloader properties files can be set (which enables easy redeployment-friendly webapp configuration), and the properties files support slightly extended constructs which allow more freedom for defining handlers and assigning them to loggers.

JULI is enabled by default in Tomcat 6.0, and supports per-classloader configuration, in addition to the regular global java.util.logging configuration. This means that logging can be configured at the following layers:

- In the JDK's logging.properties file. Check your JAVA_HOME environment setting to see which JDK Tomcat is using. The file will be in $JAVA_HOME/jre/lib. Alternately, it can also use a global configuration file located elsewhere by using the system property java.util.logging.config.file, or programmatic configuration using java.util.logging.config.class.
- In each classloader using a logging.properties file. This means that it is possible to have a configuration for the Tomcat core, as well as separate configurations for each webapp, which will have the same lifecycle as the webapps.

The default logging.properties specifies a ConsoleHandler for routing logging to stdout and also a FileHandler. A handler's log level threshold can be set using SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST or ALL. The logging.properties shipped with the JDK is set to INFO. You can also target specific packages to collect logging from and specify a level. Here is how you would set debugging from Tomcat. You would need to ensure the ConsoleHandler's level is also set to collect this threshold, so FINEST or ALL should be set.
Please refer to Sun's java.util.logging documentation for the complete details.

org.apache.catalina.level=FINEST

The configuration used by JULI is extremely similar, but uses a few extensions to allow better flexibility in assigning loggers. The main differences are:

- A prefix may be added to handler names, so that multiple handlers of a single class may be instantiated. A prefix is a String which starts with a digit and ends with '.'. For example, "22foobar." is a valid prefix.
- As in Java 5.0, loggers can define a list of handlers using the loggerName.handlers property.
- By default, loggers will not delegate to their parent if they have associated handlers. This may be changed per logger using the loggerName.useParentHandlers property, which accepts a boolean value.
- The root logger can define its set of handlers using a .handlers property.
- System property replacement for property values which start with ${systemPropertyName}.

Example logging.properties file to be placed in $CATALINA_BASE/conf:

handlers = 1catalina.org.apache.juli.FileHandler, 2localhost.org.apache.juli.FileHandler, \
           3manager.org.apache.juli.FileHandler, 4admin.org.apache.juli.FileHandler, \
           java.util.logging.ConsoleHandler

.handlers = 1catalina.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler

############################################################
# Handler specific properties.
# Describes specific configuration info for Handlers.
############################################################
1catalina.org.apache.juli.FileHandler.level = FINE
1catalina.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
1catalina.org.apache.juli.FileHandler.prefix = catalina.

2localhost.org.apache.juli.FileHandler.level = FINE
2localhost.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
2localhost.org.apache.juli.FileHandler.prefix = localhost.

3manager.org.apache.juli.FileHandler.level = FINE
3manager.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
3manager.org.apache.juli.FileHandler.prefix = manager.

4admin.org.apache.juli.FileHandler.level = FINE
4admin.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
4admin.org.apache.juli.FileHandler.prefix = admin.

java.util.logging.ConsoleHandler.level = FINE
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter

############################################################
# Facility specific properties.
# Provides extra control for each logger.
############################################################
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].handlers = \
    2localhost.org.apache.juli.FileHandler

org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].handlers = \
    3manager.org.apache.juli.FileHandler

org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/admin].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/admin].handlers = \
    4admin.org.apache.juli.FileHandler

# For example, set the com.xyz.foo logger to only log SEVERE
# messages:
#org.apache.catalina.startup.ContextConfig.level = FINE
#org.apache.catalina.startup.HostConfig.level = FINE
#org.apache.catalina.session.ManagerBase.level = FINE

Example logging.properties for the servlet-examples web application, to be placed in WEB-INF/classes inside the web application:

handlers = org.apache.juli.FileHandler, java.util.logging.ConsoleHandler

############################################################
# Handler specific properties.
# Describes specific configuration info for Handlers.
############################################################
org.apache.juli.FileHandler.level = FINE
org.apache.juli.FileHandler.directory = ${catalina.base}/logs
org.apache.juli.FileHandler.prefix = servlet-examples.

java.util.logging.ConsoleHandler.level = FINE
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
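The prefixed handler names in the examples above follow a simple rule: an optional prefix (starting with a digit, ending with '.') in front of the handler class name, so several FileHandler instances can coexist. A small Python sketch of that naming convention (my own illustration of the rule as stated, not JULI code):

```python
def split_handler_name(name):
    """Split a JULI-style handler name into (prefix, class_name).

    Per the rule above, a prefix starts with a digit and ends with '.';
    Java class names never start with a digit, so a leading digit signals
    a prefix, which we take to end at the first '.' (as in "1catalina."
    or "22foobar.").
    """
    if name and name[0].isdigit():
        dot = name.index(".")
        return name[:dot + 1], name[dot + 1:]
    return "", name

print(split_handler_name("1catalina.org.apache.juli.FileHandler"))
# ('1catalina.', 'org.apache.juli.FileHandler')
```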
https://w.atwiki.jp/apache_reading/pages/25.html
/**
 * Open the specified file.
 * @param newf The opened file descriptor.
 * @param fname The full path to the file (using / on all systems)
 * @param flag Or'ed value of:
 * <PRE>
 *         APR_READ              open for reading
 *         APR_WRITE             open for writing
 *         APR_CREATE            create the file if not there
 *         APR_APPEND            file ptr is set to end prior to all writes
 *         APR_TRUNCATE          set length to zero if file exists
 *         APR_BINARY            not a text file (This flag is ignored on
 *                               UNIX because it has no meaning)
 *         APR_BUFFERED          buffer the data.  Default is non-buffered
 *         APR_EXCL              return error if APR_CREATE and file exists
 *         APR_DELONCLOSE        delete the file after closing.
 *         APR_XTHREAD           Platform dependent tag to open the file
 *                               for use across multiple threads
 *         APR_SHARELOCK         Platform dependent support for higher
 *                               level locked read/write access to support
 *                               writes across process/machines
 *         APR_FILE_NOCLEANUP    Do not register a cleanup with the pool
 *                               passed in on the <EM>pool</EM> argument (see below).
 *                               The apr_os_file_t handle in apr_file_t will not
 *                               be closed when the pool is destroyed.
 *         APR_SENDFILE_ENABLED  Open with appropriate platform semantics
 *                               for sendfile operations.  Advisory only,
 *                               apr_socket_sendfile does not check this flag.
 * </PRE>
 * @param perm Access permissions for file.
 * @param pool The pool to use.
 * @remark If perm is APR_OS_DEFAULT and the file is being created,
 * appropriate default permissions will be used.
 * @remark By default, the returned file descriptor will not be
 * inherited by child processes created by apr_proc_create(). This
 * can be changed using apr_file_inherit_set().
 */
APR_DECLARE(apr_status_t) apr_file_open(apr_file_t **newf, const char *fname,
                                        apr_int32_t flag, apr_fileperms_t perm,
                                        apr_pool_t *pool);
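The flag argument is a bitwise OR of the constants listed above: each flag is a distinct bit, flags combine with |, and an implementation tests for a flag with &. A small Python sketch of just that bit arithmetic (the numeric values here are illustrative only, not APR's real constants, which live in apr_file_io.h):

```python
# Illustrative bit values only -- APR defines the real constants itself.
APR_READ = 0x1
APR_WRITE = 0x2
APR_CREATE = 0x4
APR_EXCL = 0x8

# Combine flags with bitwise OR, e.g. "create exclusively for writing".
flag = APR_WRITE | APR_CREATE | APR_EXCL

# An implementation branches on individual bits with bitwise AND:
print(bool(flag & APR_CREATE))  # True
print(bool(flag & APR_READ))    # False
```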
https://w.atwiki.jp/spring_atoz/
はじめに このサイトは、Spring Frameworkを0からマスターするまで、メモを残しておきます。 あくまで、メモベースなので、これからSpring Frameworkをマスターする人にはお勧め 環境準備 1.環境変数にHOMEを追加する。具体的には、[コンピュータ]-[プロパティ]-[システムの詳細設定]-[環境変数]で行う。 自分のユーザディレクトリでもよい。 2.http //mergedoc.sourceforge.jp/より、「Pleiades All in One 4.2.1 Java Full Edition x64」をダウンロードし、D \pleiades\にインストール 3.ecpliseを起動し、workspaceをD \pleiades\workspaceを指定する。 4.[ヘルプ]-[Eclipseマーケットプレイス]-[SpringSource Tool Suite]をインストール Spring MVCでHello Worldの作り方 1.[ファイル]-[新規]-[Spring]-[Spring Template Project]-[Spring MVC Project]を選択 2.プロジェクト名を「springmvc_helloworld」、パッケージ名を「jp.sample.spring_mvc.helloworld」 ※パッケージ名は、会社のドメインを逆転させ、その下に名前を付けるのが一般的 3.パッケージ・エクスプローラーで、右クリックで[実行]-[サーバーで実行]-[Apache Tomcat v7.0]を選択し、 Tomcatインストール・ディレクトリを、D \pleiades\tomcat\7.0に指定し、サーバ実行する。 ※[ウィンドウ]-[ビューの表示]-[その他]-[サーバー]を選択すると、tomcatを起動、停止ができる。 4.http //localhost 8080/spring_helloworldで実行すると、Hello Worldが表示される。 Spring BatchでHello Worldの作り方 1.[ファイル]-[新規]-[Spring]-[Spring Template Project]-[Simple Spring Batch Project]を選択 2.プロジェクト名を「springmvc_helloworld」、パッケージ名を「jp.sample.spring_mvc.helloworld」 3.パッケージ・エクスプローラーで、右クリックで[実行]-[Maven clean]、[Maven install]でjarを作成 ※cleanをしないと、jarが見つからない状態になる。 4.パッケージ・エクスプローラーで、右クリックで[実行]-[実行の構成]を開き、 [Javaアプリケーション]-[CodeSwitcher]で、以下の設定を行い、実行を押すとHello Worldが表示される。 ・メイン・クラス=org.springframework.batch.core.launch.support.CommandLineJobRunner ・プログラムの引数=classpath /launch-context.xml job1 5.コマンドラインから実行する場合は、パッケージ・エクスプローラーで、右クリックで[実行]-[実行の構成]を開き、 [Maven install]を実行する。その後、pom.xmlを修正し、jar→warに変更して、再度[Maven install]を実行。その後は、以下のコマンドに従う。 cd D \pleiades\workspace\springbatch_helloworld\target\classes copy ..\spring-batch-simple-2.0.0.CI-SNAPSHOT.jar . 
set classpath=D:\pleiades\workspace\springbatch_helloworld\target\spring-batch-simple-2.0.0.CI-SNAPSHOT\WEB-INF\lib\*;D:\pleiades\workspace\springbatch_helloworld\target\classes\*
java org.springframework.batch.core.launch.support.CommandLineJobRunner /launch-context.xml job1

* Adjust the classpath and the path to launch-context.xml to match your actual environment.

[Spring Batch] Using log4j

You can control how logs are written, and change the output by editing log4j.properties. There are five levels (DEBUG, INFO, WARN, ERROR, FATAL), and they can be specified per package.

# Log to the console (the default setting)
log4j.rootCategory=ERROR, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m %n

# Log to a file (D:\test.log, rotated at 10MB, up to 50 files)
log4j.rootCategory=ERROR, fileout
log4j.appender.fileout=org.apache.log4j.RollingFileAppender
log4j.appender.fileout.File=D:/test.log
log4j.appender.fileout.MaxFileSize=10MB
log4j.appender.fileout.MaxBackupIndex=50
log4j.appender.fileout.layout=org.apache.log4j.PatternLayout
log4j.appender.fileout.layout.ConversionPattern=%d{yyyy/MM/dd HH:mm:ss.SSS} [%p] - %m%n

# Log to a DB (Oracle table LOG_TABLE)
log4j.rootCategory=ERROR, dbout
log4j.category.com.fc2web.himtodo.test=DEBUG, TEST
log4j.appender.dbout=org.apache.log4j.jdbc.JDBCAppender
log4j.appender.dbout.URL=jdbc:oracle:thin:@127.0.0.1:1521:TEST
log4j.appender.dbout.user=test
log4j.appender.dbout.password=test
log4j.appender.dbout.driver=oracle.jdbc.driver.OracleDriver
log4j.appender.dbout.bufferSize=1
log4j.appender.dbout.layout=org.apache.log4j.PatternLayout
log4j.appender.dbout.layout.ConversionPattern=INSERT INTO LOG_TABLE VALUES ('%d{yyyy-MM-dd HH:mm:ss.SSS}', '%p', '%m')

# Print only the results of the run.
log4j.rootCategory=ERROR, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%m
log4j.logger.jp=INFO

# Output to the Unix syslog daemon or the NT event log is also possible.

[Spring Batch] Adding postgres with Maven and connecting to the DB

Spring Batch uses a database to store its execution state, so add the PostgreSQL JDBC driver with Maven and configure batch.properties.
* Caution: once this is configured, each run requires changed job parameters (e.g. date=2012/1/1, name=1); to force a run anyway, pass -next as an argument.

1. After installing postgres, create a new database (springbatch) from pgAdmin.
2. In Package Explorer, right-click and open [Maven] - [Add Dependency].
3. Enter "postgres" in the pattern field (*); "postgres postgres" appears in the search results, so pick the latest version and press [OK], and the PostgreSQL JDBC driver is added.
4. Rewrite batch.properties as follows and run the batch program; the tables are created in the DB.

batch.jdbc.driver=org.postgresql.Driver
batch.jdbc.url=jdbc:postgresql://localhost:5432/springbatch
# use this one for a separate server process so you can inspect the results
# (or add it to system properties with -D to override at run time).
# batch.jdbc.url=jdbc:hsqldb:hsql://localhost:9005/samples
batch.jdbc.user=postgres
batch.jdbc.password=(the password you set for postgres)
batch.schema=
batch.schema.script=classpath:/org/springframework/batch/core/schema-postgresql.sql

5. From the second run onward the tables no longer need to be created, so comment out the following part of launch-context.xml:

<!-- <jdbc:initialize-database data-source="dataSource">
  <jdbc:script location="${batch.schema.script}"/>
</jdbc:initialize-database> -->
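As a hedged sketch of step 1 without pgAdmin (assuming a default local PostgreSQL install with the postgres superuser; the database name and job name come from the text above), the database can also be created from the command line, and the -next flag mentioned above forces a re-run without hand-editing parameters:

```
# Create the metadata database referenced by batch.jdbc.url
createdb -U postgres springbatch

# Re-run the job without changing parameters: -next generates the next set
java org.springframework.batch.core.launch.support.CommandLineJobRunner /launch-context.xml job1 -next
```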
https://w.atwiki.jp/matchmove/pages/95.html
Merging Files and Tracks

When you are working on scenarios with multiple shots or objects, you may wish to combine different SynthEyes .sni files together. For example, you may track a wide reference shot, and want to use those trackers as indirect links for several other shots. You can save the tracked reference shot, then use the File/Merge option to combine it with each of several other files. Alternatively, you can transfer 2-D or 3-D data from one file to another, in the process making a variety of adjustments to it as discussed in the second subsection. You can track a file in several different auto-track sections, and recombine them using the scripts.

File/Merge

After you start File/Merge and select a file to merge, you will be asked whether or not to rename the trackers as necessary, to make them unique. If the current scene has Camera01 with trackers Tracker01 to Tracker05, and the scene being merged also has Camera01 with trackers Tracker01 to Tracker05, then answering yes will result in Camera01 with Tracker01 to Tracker05 and Camera02 with Tracker06 to Tracker10. If you answer no, Camera01 will have Tracker01 to Tracker05 and Camera02 will also have (different) Tracker01 to Tracker05, which is more confusing to people than to machines. As that example shows indirectly, cameras, objects, meshes, and lights are always renamed to be unique. Renaming is always done by appending a number: if the incoming and current scenes both have a TrashCan, the incoming one will be renamed to TrashCan1. If you are combining a shot with a previously-tracked reference, you will probably want to keep the existing tracker names, to make it easiest to find matching ones. Otherwise, renaming them with yes is probably the least confusing, unless you have particular knowledge of the TrackerNN assignments (in which case, giving them actual names such as Scuff1 is probably best).
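The renaming rule described above (append a number until the name is unique) is easy to picture in code. A minimal sketch in plain Java, not anything from SynthEyes, with made-up names:

```java
import java.util.HashSet;
import java.util.Set;

public class MergeRename {
    // Returns the name unchanged if unused, otherwise appends 1, 2, ...
    // until the result is unique, mirroring TrashCan -> TrashCan1.
    static String uniqueName(Set<String> existing, String name) {
        if (!existing.contains(name)) return name;
        int n = 1;
        while (existing.contains(name + n)) n++;
        return name + n;
    }

    public static void main(String[] args) {
        Set<String> scene = new HashSet<>();
        scene.add("TrashCan");
        System.out.println(uniqueName(scene, "TrashCan")); // TrashCan1
        System.out.println(uniqueName(scene, "Lamp"));     // Lamp
    }
}
```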
You might occasionally track one portion of a shot in one scene file, and track a different portion of the same shot in a separate file. You can combine the scene files onto a single camera as follows:

1. Open the first shot.
2. File/Merge the second shot. Answer yes to make tracker names unique (important!).
3. Select Camera02 from the Shot menu.
4. Hit control-A to select all its trackers.
5. Go to the Coordinate System Panel.
6. Change the trackers' host object from Camera02 to *Camera01. (The * before the camera name indicates that you are moving the tracker to a different, but compatible, shot.)
7. Delete any moving objects, lights, or meshes attached to Camera02.
8. Select Remove Object on the Shot menu to delete Camera02.

All the trackers will now be on the single Camera01. Notice how Remove Object can be used to remove a moving object, or a camera and its shot. In each case, however, any other moving objects, trackers, lights, meshes, etc., must be removed first, or the Remove Object will be ignored.

Tracker Data Transfer

You can transfer tracking data from file to file using the SynthEyes export scripts File/Export/Export 2-D Tracker Paths and File/Import/Import 2-D Tracker Paths. These scripts can be used to interchange with other programs that support similar tracking data formats. The scripts can be used to make a number of remedial transforms as well, such as repairing track data if the source footage is replaced with a new version that is cropped differently. The simple data format (a tracker name, frame number, horizontal and vertical positions, and an optional status code) also permits external manipulations by UNIX-style scripts and even spreadsheets.

Exporting

Initiate the Export 2-D Tracker Paths script and select a file, and a script-generated dialog box will appear. As can be seen, it affords quite a bit of control. The first three fields control the range of frames to be exported, in this case frames 10 to 15.
The offset allows the frame number in the file to be somewhat different; for example, -10 would make the first exported frame appear to be frame zero, as if frame 10 was the start of the shot. The next four fields, two scales and two offsets, manipulate the horizontal (U) and vertical (V) coordinates. SynthEyes defines these to range from -1 to +1, from left to right and top to bottom. Each coordinate is multiplied by its scale and then the offset is added. The normal defaults are scale=1 and offset=0. The values of 0.5 and 0.5 shown rework the ranges to go from 0 to 1, as may be used by other programs. A scale of -0.5 would change the vertical coordinate to run from bottom to top, for example. The scales and offsets can be used for a variety of fixes, including changes in the source imagery. You'll have to cook up the scale and offset on your own, though. Note that if you are writing a tracker file from SynthEyes and will then read it back in with a transform, it is easiest to write it with scale=1 and offset=0, then make changes as you read in, since if you need to try again you can retry the import without having to re-export.

Continuing with the controls, "Even when missing" causes a line to be output even if the tracker was not found in that frame. This permits a more accurate import, though other programs are less likely to understand the file. Similarly, the "Include Outcome Codes" checkbox controls whether or not a small numeric code appears on each line that indicates what was found; it permits a more accurate import, though is less likely to be understood elsewhere. The "2-D tracks" box controls whether or not the raw 2-D tracking data is output; this is not necessarily mandatory, as you'll see. The "3-D tracks" box controls whether or not the 3-D path of each tracker is included; this will be the 2-D path of the solved 3-D position, and is quite smooth. In the example, 3-D paths are exported and 2-D paths are not, which is the reverse of the default.
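The scale-and-offset arithmetic described above is simple enough to check by hand. A minimal sketch in plain Java (not SynthEyes code; the names are made up for illustration), showing how scale=0.5, offset=0.5 remaps the -1..+1 range to 0..1, and how import values of 2 and -1 undo it:

```java
public class UvRemap {
    // The exporter's transform: each coordinate is multiplied
    // by its scale, then the offset is added.
    static double remap(double coord, double scale, double offset) {
        return coord * scale + offset;
    }

    public static void main(String[] args) {
        // Export with scale=0.5, offset=0.5: -1..+1 becomes 0..1
        double left  = remap(-1.0, 0.5, 0.5);  // 0.0
        double right = remap(1.0, 0.5, 0.5);   // 1.0
        // Import with scale=2, offset=-1 undoes it: 0 maps back to -1
        double back = remap(left, 2.0, -1.0);  // -1.0
        System.out.println(left + " " + right + " " + back);
    }
}
```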
When the 3-D paths are exported, an extra "Suffix for 3-D" can be added to the tracker names; usually this is _3D, so that if both are output, you can tell which is which. Finally, the "Extra Points" box controls whether or not the 2-D paths of any extra helper points in the scene are output.

Importing

The File/Import/Import 2-D Tracker Paths import can be used to read the output of the 2-D exporter, or output from other programs as well. The import script offers a similar set of controls to the exporter's. The import runs roughly in reverse of the export. The frame offset is applied to the frame numbers in the file, and only those within the selected first and last frames are stored. The scale and offset can be adjusted; by default they are 1 and 0 respectively. The values of 2 and -1 shown undo the effect of the 0.5/0.5 in the example export panel.

If you are importing several different tracker data files into a single moving object or camera, you may have several different trackers all named Tracker1, for example, and after combining the files, this would be undesirable. Instead, by turning on "Force unique names", each would be assigned a new unique name. Of course, if you have done supervised tracking in some different files to combine, you might well leave it off, to combine the paths together. If the input data file contains data only for frames where a tracker has been found, the tracker will still be enabled past the last valid frame. By turning on "Truncate enables after last", the enable will be turned off after the last valid frame.

After each tracker is read, it is locked up. You can unlock and modify it as necessary. The tracking data file contains only the basic path data, so you will probably want to adjust the tracker size, search size, etc. If you will be writing your own tracker data file for this script to import, note that the lines must be sorted so that the lines for each specific tracker are contiguous, and sorted in order of ascending frame number.
This convention makes everyone's scripts easier. Also, note that the tracker names in the file never contain spaces; they will have been changed to underscores.

Transferring 3-D Paths

The path of a camera or object can be exported into a plain file containing a frame number, 3 positions, 3 rotations, and an optional zoom channel (field of view or focal length). Like the 2-D exporter, the File/Export/Plain Camera Path exporter provides a variety of options:

- First Frame. First frame to export.
- Last Frame. Last frame to export.
- Frame Offset. Add this value to the frame number before storing it in the file.
- World Scaling. Multiplies the X, Y, Z coordinates, making the path bigger or smaller.
- Axis Mode. Radio buttons for Z Up; Y Up, Right; Y Up, Left. Adjust to select the desired output alignment, overriding the current SynthEyes scene setting.
- Rotation Order. Radio buttons XYZ or ZXY. Controls the interpretation of the 3 rotation angles in the file.
- Zoom Channel. Radio buttons None, Field of View, Vertical Field of View, Focal Length. Controls the 7th data channel, namely what kind of field of view data is output, if any.
- Look the other way. The SynthEyes camera looks along the -Z axis; some systems have the camera look along +Z. Select this checkbox for those other systems.

The 3-D path importer, File/Import/Camera/Object Path, has the same set of options. Though this seems redundant, it lets the importer read flexibly from other packages. If you are writing from SynthEyes and then reading the same data back in, you can leave the settings at their defaults on both export and import (unless you want to time-shift too, for example). If you are changing something, usually it is best to do it on the import, rather than the export.

Writing 3-D Tracker Positions

You can output the trackers' 3-D positions using the File/Export/Plain Trackers script with these options:

- Tracker Names. Radio buttons At beginning, At end of line, None. Controls where the tracker names are placed on each output line. The end-of-line option allows tracker names that contain spaces; spaces are changed to underscores if the names are at the beginning of the line.
- Include Extras. If enabled, any helper points are also included in the file.
- World Scaling. Multiplies the coordinates to increase or decrease overall scaling.
- Axis Mode. Temporarily changes the coordinate system setting as selected.

Reading 3-D Tracker Positions

On the input side, there is a File/Import/Tracker Locations option and a File/Import/Extra Points option. Neither has any controls; they automatically detect whether the name is at the beginning or end of the line. Putting the names at the end of each line is most flexible, because then there is no problem with spaces embedded in the names. A sample file might consist of lines such as:

0 0 0 Origin
10 0 0 OnXAxis
13 -5 0 OnGroundPlane
22 10 0 AnotherGroundPlane
3 4 12 LightPole

When importing trackers, the coordinates are automatically set up as a seed position on the tracker. You may want to change it to a Lock constraint as well. If a tracker of the given name does not exist, a new tracker will be created.
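The begin-or-end auto-detection described above can be sketched in a few lines (plain Java, not SynthEyes code; illustrative only): if the first whitespace-separated token parses as a number, the name must be at the end of the line, and vice versa.

```java
public class TrackerLine {
    // Returns the tracker name from a line like "0 0 0 Origin"
    // or "Origin 0 0 0", detecting which end holds the name.
    static String trackerName(String line) {
        String[] tok = line.trim().split("\\s+");
        try {
            Double.parseDouble(tok[0]);   // first token numeric...
            return tok[tok.length - 1];   // ...so the name is last
        } catch (NumberFormatException e) {
            return tok[0];                // otherwise the name is first
        }
    }

    public static void main(String[] args) {
        System.out.println(trackerName("13 -5 0 OnGroundPlane")); // OnGroundPlane
        System.out.println(trackerName("LightPole 3 4 12"));      // LightPole
    }
}
```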
https://w.atwiki.jp/akatonbo/pages/1093.html
Named VIPPER
Lyrics: 39-thread 716 / Music: 39-thread 739

He always goes there. Looking for someone.
But there was nothing in the room.
At last the doctor gave up all hope of saving him.
How long have you been waiting?

VIPPER, it's a name whom you give him.
He needn't stand on formality in this house.
The children were playing so happily. On Sanday.

He will never go there again.
Have plenty of time yet.
"I've had enough of your silly chatter."
No longer can't bear this pain any longer

VIPPER, it's a name whom you give him.
You needn't stand on formality in this house.
The children were playing so happily. On Sanday.

Audio: Named VIPPER.mp3
https://w.atwiki.jp/akatonbowiki/pages/2393.html
This page has moved here: Named VIPPER (lyrics: 39-thread 716, music: 39-thread 739; audio: Named VIPPER.mp3). (This page was reposted from the old wiki.)
https://w.atwiki.jp/memotech/pages/65.html
Automatic updates with cron-apt

1. Install the required package (e.g. with sudo apt-get install cron-apt).
2. Configure it with sudo vi /etc/cron-apt/config. The author's example:

# Configuration for cron-apt.
# The cron config is located in /etc/cron.d/cron-apt
# This shows the defaults.
#
# The command used to execute all actions. By default, apt-get is used.
# Change this to /usr/bin/aptitude to use aptitude instead, which will
# resolve changed Recommends (and Suggests as well, if aptitude is so
# configured). You can also set other utilities (especially useful in the
# config.d directory) to set some completely different tool.
# OBSERVE that this tool is intended for apt-get, and tools like aptitude do not
# have full support for noninteractive upgrades. You may have to tune options
# to not create infinite logfiles, for example.
APTCOMMAND=/usr/bin/apt-get
# APTCOMMAND=/usr/bin/aptitude
# APTCOMMAND=/usr/bin/apt-file

# A path is needed for this to work. This is the default PATH.
# export PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin

# The random sleep time in seconds. This is used to prevent clients from
# accessing the APT sources all at the same time and overwhelming them.
# Default is 3600 seconds which means one hour.
# RUNSLEEP=3600

# The minimum amount of disc space (in kB) that needs to exist on the
# device where temporary files are created (mktemp) to allow cron-apt
# to run. If set to 0 it will always continue even if empty.
# MINTMPDIRSIZE=10

# The directory where the actions are stored.
ACTIONDIR="/etc/cron-apt/action.d"

# The directory where configuration per action is stored. The message file
# must have the same name as the action file.
ACTIONCONFDIR="/etc/cron-apt/config.d"

# The directory where messages that will be prepended to the email that is
# sent (per action) is stored. The message file must have the same name as
# the action file.
# MAILMSGDIR="/etc/cron-apt/mailmsg.d"

# The directory where messages that will be prepended to text that is
# sent (per action) to syslog.
# The message file must have the same name as
# the action file.
# SYSLOGMSGDIR="/etc/cron-apt/syslogmsg.d"

# The directory where messages that will be prepended to the error message
# (per action) is stored. The message file must have the same name as
# the action file.
ERRORMSGDIR="/etc/cron-apt/errormsg.d"

# The directory where messages that will be prepended to the log (debug)
# message (per action) is stored. The message file must have the same name as
# the action file.
LOGMSGDIR="/etc/cron-apt/logmsg.d"

# The directory where messages that will be prepended to the mail message
# (per MAILON type) is stored. The message file must have the same name as
# the $MAILON directive.
# MAILONMSGSDIR="/etc/cron-apt/mailonmsgs"

# The directory where messages that will be prepended to the syslog message
# (per SYSLOGON type) is stored. The message file must have the same name as
# the $SYSLOGON directive.
SYSLOGONMSGSDIR="/etc/cron-apt/syslogonmsgs"

# Value ""       (warn if dotlockfile not installed)
#       "nowarn" (don't give warning if dotlockfile not installed)
# NOLOCKWARN=""

# The file that contains error messages.
ERROR="/var/log/cron-apt/error"

# The file that contains current run information
# when still running the script.
TEMP="/var/log/cron-apt/temp"

# The logfile (for debugging). Use syslog for normal logging.
LOG="/var/log/cron-apt/log"

# The mail file.
# MAIL="/var/log/cron-apt/mail"

# The email address to send mail to.
# MAILTO="root"

# When to send email about the cron-apt results.
# Value error   (send mail on error runs)
#       upgrade (when packages are upgraded)
#       changes (mail when change in output from an action)
#       output  (send mail when output is generated)
#       always  (always send mail)
#               (else never send mail)
# MAILON="error"

# Value error   (syslog on error runs)
#       upgrade (when packages are upgraded)
#       changes (syslog when change in output from an action)
#       output  (syslog when output is generated)
#       always  (always syslog)
#               (else never syslog)
SYSLOGON="upgrade"

# Value error (exit on error only)
#             (else never exit)
# EXITON="error"

# Value verbose (log everything)
#       always  (always log)
#       upgrade (when packages are upgraded)
#       changes (log when change in output from an action)
#       output  (log when output is generated)
#       error   (log error runs only)
#               (else log nothing)
# DEBUG="output"

# What to do with the diff when *ON=changes.
# Value prepend (prepend to the output)
#       append  (append to the output)
#       only    (only show the diff, not the output itself)
#               (else do nothing)
# DIFFONCHANGES=prepend

# General apt options that will be passed to all APTCOMMAND calls.
# Use "-o quiet" instead of "-q" for aptitude compatibility.
# OPTIONS="-o quiet=1"
# You can for example add an alternative sources.list file here.
# OPTIONS="-o quiet=1 -o Dir::Etc::SourceList=/etc/apt/security.sources.list"
# If you want to allow unauthenticated and untrusted packages add the
# following to your options directive.
# OPTIONS="-o quiet=1 -o APT::Get::AllowUnauthenticated=true -o aptitude::Cmdline::ignore-trust-violations=yes"

# Additional APT configuration file that is loaded first. This can be set in
# order to use a completely different APT configuration for cron-apt. See the
# /usr/share/doc/cron-apt/README and apt.conf(5) for details.
# export APT_CONFIG=/etc/apt/cron.apt.paths

# Do not run the command, if there is an error in the previous run (default).
# Value error (do not run if there is an error on last run)
#             (else always run, remove previous error file and run)
# DONTRUN=""

# If this file exists cron-apt will silently exit.
# REFRAINFILE=/etc/cron-apt/refrain

# If this is non-empty, it will be used as the host name in subjects of
# generated e-mail messages. If this is empty, the output of uname -n
# will be used.
HOSTNAME="jboss-file"

# Ignore lines matching this regexp to determine whether changes occurred
# for MAILON="changes". If empty no lines will be ignored.
# Suggested value for aptitude
# DIFFIGNORE="^\(Get:[[:digit:]]\+\|Hit\|Ign\|Del\|Fetched\|Freed\|Reading\)[[:space:]]"
# Suggested value for apt-get
# DIFFIGNORE="^\(Get:[[:digit:]]\+\|Hit\|Ign\)[[:space:]]"
# Default
# DIFFIGNORE=""

3. Configure the action with sudo vi /etc/cron-apt/action.d/3-download. This setting downloads the packages and runs apt-get upgrade. The author's example:

autoclean -y
upgrade -y -o APT::Get::Show-Upgraded=true

4. In /etc/cron.d/cron-apt, change the run scheduled for 4:00 every morning to whatever time you like.

Last updated: 2008-11-04 08:16:37 (Tue)
@めもてっく is licensed under a Creative Commons Attribution 2.1 Japan License.
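Step 4 above is an ordinary crontab edit. A hedged example of what /etc/cron.d/cron-apt might look like after moving the default 4:00 run to, say, 6:30 every morning (the stock command line varies by cron-apt version; this one is illustrative):

```
# /etc/cron.d/cron-apt: run at 6:30 every morning instead of 4:00
# m  h  dom mon dow  user  command
30 6  *   *   *     root  test -x /usr/sbin/cron-apt && /usr/sbin/cron-apt
```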
https://w.atwiki.jp/dotcom/pages/145.html
Now that output to an external log works, let's move on to the next thing. Formatting the log! Right now it's just a plain message. Let's make it more log-like. We'll use org.apache.log4j.PatternLayout.

Contents:
- Setting up PatternLayout however I like
- Printing with TTCC_CONVERSION_PATTERN
- Printing in my own format

Here we go: org.apache.log4j.PatternLayout. This is it, this one! Wait, haven't I used this somewhere before...? The scary trap of copy-paste...

We create it like this:
PatternLayout layout = new PatternLayout();
and it's used in the appender:
appender = new WriterAppender(layout, writer);
So maybe we just need to set something on this layout? That was my naive first thought. Or no, maybe it happens when we call new?

Setting up PatternLayout however I like

I changed
PatternLayout layout = new PatternLayout();
to
PatternLayout layout = new PatternLayout(PatternLayout.TTCC_CONVERSION_PATTERN);
TTCC_CONVERSION_PATTERN is the pattern
%r [%t] %p %c %x - %m%n
To start with, I wanted to try one of the ready-made patterns before specifying my own.

Printing with TTCC_CONVERSION_PATTERN

So I put this together. The only difference from before is that the PatternLayout is now constructed with a pattern.

public static void main(String argv[]) {
    PatternLayout layout = new PatternLayout(PatternLayout.TTCC_CONVERSION_PATTERN);
    // output file name
    String file = "sample.log";
    // java.io.Writer object
    // org.apache.log4j.WriterAppender object
    Writer writer = null;
    WriterAppender appender = null;
    try {
        writer = new FileWriter(file, true);
        appender = new WriterAppender(layout, writer);
    } catch (IOException e) {
    }
    Logger logger = Logger.getLogger("Sample");
    logger.addAppender(appender);
    logger.info("This is info.");
    System.out.println("all done");
}

OK. The result:
0 [main] INFO Sample - This is info.
is what came out. Ooh, how lovely! Now let's think about what it means. Those %m and %n things from earlier turn into the output above.
%r [%t] %p %c %x - %m%n
%r outputs the time in milliseconds from when the application started until the log was output. Zero means instantly!
%t outputs the name of the thread that generated the log. It's enclosed in "[" and "]", which are printed literally as-is. main. Right, there's only the main thread. I think that's correct?
%p outputs the priority of the log.
%c outputs the category name of the logging event.
%x outputs the NDC (nested diagnostic context) of the thread that generated the log. Teacher, I don't understand this one! And nothing was printed for it anyway.
%m outputs the message supplied with the logging event. It comes out exactly as I asked. Well, as I copy-pasted.
%n outputs the platform-dependent line separator, so the next log entry appears after a line break.

Hmm. This already feels pretty good, but I'd like the date and time too. Let's build one myself, then. So I tried it.

Printing in my own format

First, something simple. I changed the previous section's
new PatternLayout(PatternLayout.TTCC_CONVERSION_PATTERN);
to
String LogPattern = "%p %d{yyyy/MM/dd hh:mm:ss} %c %C %n";
PatternLayout layout = new PatternLayout(LogPattern);
And the result:
INFO 2007/03/28 02:08:32 Sample Log4Jtest.test.dotcom.Log4jTest
came out! Is %C the class name at the time of execution??? So would %C change if this ran from a different class?
Next I want to use this in all sorts of places. Embed it in things. I also want to learn the way of doing this with a configuration file. Still plenty to study.
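The post ends wanting to try a configuration file, so here is a hedged sketch of the same setup as a log4j.properties instead of code (the logger name "Sample", the file sample.log, and the pattern come from the example on this page; log4j 1.x picks up a log4j.properties found on the classpath automatically):

```
# Equivalent of the programmatic setup above: INFO and up for logger
# "Sample", appended to sample.log with the custom pattern
log4j.logger.Sample=INFO, fileout
log4j.appender.fileout=org.apache.log4j.FileAppender
log4j.appender.fileout.File=sample.log
log4j.appender.fileout.Append=true
log4j.appender.fileout.layout=org.apache.log4j.PatternLayout
log4j.appender.fileout.layout.ConversionPattern=%p %d{yyyy/MM/dd hh:mm:ss} %c %C %n
```

With this file on the classpath, the PatternLayout/WriterAppender wiring in the code is no longer needed; Logger.getLogger("Sample") alone picks up the configuration.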